Introduction to This Special Issue on Multimodal Interfaces
Abstract
The growing emphasis on multimodal interface design is fundamentally inspired by the aim to support natural, flexible, efficient, and powerfully expressive means of human-computer interaction that are easy to learn and use. Multimodal interfaces represent a new direction for computing that draws from the myriad input and output technologies becoming available, and that potentially can integrate complementary modalities to yield more synergistic blends than have been possible previously. It is a research direction that seeks guidance from cognitive science expertise on the coordinated human perception and production of naturally co-occurring modalities (e.g., speech, gesture, gaze, and facial movements) during interaction with people and computers. In fact, the realization of successful multimodal systems depends on, and can flourish only through, extensive interdisciplinary cooperation, as well as teamwork among those representing expertise in the individual component technologies. In this special issue, new work on next-generation multimodal interfaces includes articles based on PhD theses by Elizabeth Mynatt of Xerox Palo Alto Research Center, Robert Stevens of the University of York, and Yi Han of the University of Melbourne. It also includes articles based on extensive work by Steve Roth's group at Carnegie Mellon University and from Sharon Oviatt's laboratory at the Oregon Graduate Institute of Science and Technology. Reading these articles, it becomes clear that the term multimodal interface in no sense refers to a unified and specialized subarea of research. Rather, it is a collection of emerging research areas, like early evening stars gradually glowing more brightly.
Some of the clusters already visible include multimodal interfaces that support computing for special populations (e.g., Mynatt, this issue; Stevens, Edwards, & Harling, this issue), research on coordinated multimodal input (e.g., Oviatt, this issue), and work on presentation planning and coordinated multimodal output (e.g., Han & Zukerman, this issue; Roth, Chuah, Kerpedjiev, Kolojejchick, & Lucas, this issue). In addition, new methods for generating research and system development are becoming available, including multimodal simulation techniques for collecting data (e.g., Oviatt, this issue) and agent architectures for implementing integrated multimodal systems (e.g., Han & Zukerman, this issue). Multimodal interfaces have the potential to expand computing to encompass more challenging applications, for use by a broader spectrum of the population, and during more adverse usage conditions. It is not coincidental, then, that most of the articles in this issue present work in support of challenging applications; these include algebra instruction (Stevens et al., this issue), presentation of data summaries (Han & Zukerman, this issue; Roth et al., this issue), and interaction with complex spatial displays such as maps (Oviatt, this issue; Roth et al., this issue). In addition, two articles
Similar Resources
Special issue on Knowledge-based Modes of Human-Computer Interaction
This special issue on "Knowledge-based Modes of Human-Computer Interaction" aims at presenting some novel approaches to, and relevant challenging issues of, how users interact with computers in knowledge-based environments. As the population of computer users spreads to include people of all ages, backgrounds, professions, education levels, aims, profiles, preferences, and personalities, human-computer interaction ha...
Towards Multimodal User Interfaces Composition Based on UsiXML and MBD Principles
In software design, the demand for reuse has driven the growth of web services, components, and other techniques. These techniques allow reusing code associated with technical aspects (such as software components). With the development of business components that can integrate technical aspects with HCI, the composition issue has appeared. Our previous work concerned GUI composition based on a UIDL such as U...
A participatory framework to support inclusive multi-playing for gamers in disadvantaged conditions
As a consequence of the increasing diffusion of mobile and multimodal devices, on-line gaming is more and more often considered a dimension of anytime, anywhere, and anyone spaces. Providing accessible games is one of the aims of the game industry and academic research, in order to comply with the anyone issue in spite of disability. Such an issue assumes a crucial aspect when people with disabiliti...
Speech-Gesture Driven Multimodal Interfaces for Crisis Management
Emergency response requires strategic assessment of risks, decisions, and communications that are time-critical while requiring teams of individuals to have fast access to large volumes of complex information and technologies that enable tightly coordinated work. The access to this information by crisis management (CM) teams in emergency operations centers can be facilitated through various hum...
Journal: Human-Computer Interaction
Volume 12, Issue
Pages -
Publication date: 1997